Exploring dense matching between the current frame and past frames for long-range context modeling, memory-based methods have recently demonstrated impressive results in video object segmentation (VOS). Nevertheless, due to the lack of instance understanding, these approaches are often brittle to large appearance variations or viewpoint changes resulting from the movement of objects and cameras. In this paper, we argue that instance understanding matters in VOS, and that integrating it with memory-based matching yields a synergy, which is intuitively sensible given the definition of the VOS task, \ie, identifying and segmenting object instances within a video. Towards this goal, we present a two-branch network for VOS, where the query-based instance segmentation (IS) branch delves into the instance details of the current frame and the VOS branch performs spatial-temporal matching with the memory bank. We employ the well-learned object queries from the IS branch to inject instance-specific information into the query key, with which instance-augmented matching is further performed. In addition, we introduce a multi-path fusion block to effectively combine the memory readout with multi-scale features from the instance segmentation decoder, incorporating high-resolution instance-aware features to produce the final segmentation results. Our method achieves state-of-the-art performance on DAVIS 2016/2017 val (92.6% and 87.1%), DAVIS 2017 test-dev (82.8%), and YouTube-VOS 2018/2019 val (86.3% and 86.3%), outperforming alternative methods by clear margins.
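Below is a minimal PyTorch sketch (an illustrative assumption, not the paper's implementation) of what instance-augmented matching could look like: object queries from an instance-segmentation branch are injected into the current frame's query key via cross-attention, and space-time matching against the memory bank is then performed with the augmented key. All module names, shapes, and hyperparameters are placeholders.

```python
# Illustrative sketch only: shapes and module choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceAugmentedMatching(nn.Module):
    def __init__(self, dim=256, num_queries=16):
        super().__init__()
        # cross-attention that lets pixel-level query keys attend to object queries
        self.inject = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # stands in for the well-learned object queries of the IS branch
        self.obj_queries = nn.Parameter(torch.randn(num_queries, dim))

    def forward(self, query_key, mem_key, mem_val):
        # query_key: (B, HW, C)   key features of the current frame
        # mem_key:   (B, THW, C)  memory-bank keys from past frames
        # mem_val:   (B, THW, C)  memory-bank values
        B = query_key.size(0)
        q = self.obj_queries.unsqueeze(0).expand(B, -1, -1)
        inst_ctx, _ = self.inject(query_key, q, q)           # inject instance-specific information
        aug_key = query_key + inst_ctx                       # instance-augmented query key
        # standard space-time matching: affinity + softmax readout from memory
        affinity = torch.einsum("bqc,bmc->bqm", aug_key, mem_key) / aug_key.size(-1) ** 0.5
        readout = torch.einsum("bqm,bmc->bqc", F.softmax(affinity, dim=-1), mem_val)
        return readout

# toy usage
m = InstanceAugmentedMatching()
out = m(torch.randn(2, 1024, 256), torch.randn(2, 4096, 256), torch.randn(2, 4096, 256))
print(out.shape)  # torch.Size([2, 1024, 256])
```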
Although weakly-supervised techniques can reduce the labeling effort, it is unclear whether a saliency model trained with weakly-supervised data (e.g., point annotations) can achieve performance equivalent to its fully-supervised version. This paper attempts to answer this unexplored question by proving a hypothesis: there exists a point-labeled dataset on which saliency models can be trained to match the performance obtained by training on the densely annotated dataset. To prove this conjecture, we propose a novel yet effective adversarial trajectory-ensemble active learning (ATAL) method. Our contributions are three-fold: 1) Our proposed adversarial-attack-triggered uncertainty overcomes the overconfidence of existing active learning methods and accurately locates uncertain pixels. 2) Our proposed trajectory-ensemble uncertainty estimation maintains the advantages of ensemble networks while significantly reducing the computational cost. 3) Our proposed relationship-aware diversity sampling algorithm overcomes oversampling while boosting performance. Experimental results show that ATAL can find such a point-labeled dataset, on which a saliency model trained with only ten annotated points per image obtains $97\%$--$99\%$ of the performance of its fully-supervised version.
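The trajectory-ensemble idea can be sketched as follows (an assumption-laden illustration, not the ATAL code): predictions from checkpoints saved along a single training trajectory are ensembled, and a per-pixel uncertainty score over the ensembled saliency map is used to pick candidate points, avoiding the cost of training several independent networks.

```python
# Illustrative sketch only: the checkpoint handling and scoring rule are assumptions.
import torch

@torch.no_grad()
def trajectory_ensemble_uncertainty(model, checkpoints, image, k=10):
    """checkpoints: state_dicts saved along one training trajectory.
    Returns the flattened indices of the k most uncertain pixels."""
    probs = []
    for state in checkpoints:
        model.load_state_dict(state)
        model.eval()
        probs.append(torch.sigmoid(model(image)))         # (1, 1, H, W) saliency map
    mean_p = torch.stack(probs, dim=0).mean(dim=0)         # trajectory-ensembled prediction
    # binary predictive entropy of the ensembled map as the uncertainty score
    entropy = -(mean_p * mean_p.clamp_min(1e-8).log()
                + (1 - mean_p) * (1 - mean_p).clamp_min(1e-8).log())
    return entropy.flatten().topk(k).indices               # candidate points to annotate next

# toy usage with a dummy 1x1-conv "saliency model" and three identical checkpoints
model = torch.nn.Conv2d(3, 1, kernel_size=1)
ckpts = [model.state_dict() for _ in range(3)]
print(trajectory_ensemble_uncertainty(model, ckpts, torch.randn(1, 3, 64, 64), k=5))
```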
Spurious correlations in training data often lead to robustness issues, since models learn to use them as shortcuts. For example, when predicting whether an object is a cow, a model might learn to rely on the green background, so it would do poorly on a cow on a sandy background. A standard dataset for benchmarking methods that mitigate this problem is Waterbirds. The best method (Group Distributionally Robust Optimization, GroupDRO) currently achieves 89\% worst-group accuracy, while standard training from scratch on raw images only reaches 72\%. GroupDRO requires training a model end-to-end with subgroup labels. In this paper, we show that we can achieve up to 90\% accuracy without using any subgroup information in the training set, simply by using embeddings from a large pre-trained vision model as a feature extractor and training a linear classifier on top of them. Through experiments on a wide range of pre-trained models and pre-training datasets, we show that the capacity of the pre-trained model and the size of the pre-training dataset matter. Our experiments reveal that high-capacity vision transformers perform better than high-capacity convolutional neural networks, and that larger pre-training datasets lead to better worst-group accuracy on the spurious correlation dataset.
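The recipe above is simple enough to sketch directly; the snippet below (using a placeholder torchvision backbone, not necessarily one of the models evaluated in the paper) freezes a pre-trained vision transformer, extracts embeddings, and fits a linear classifier on top.

```python
# Illustrative sketch only: the backbone choice and data loaders are assumptions.
import torch
import torchvision
from sklearn.linear_model import LogisticRegression

# frozen pre-trained vision transformer with its classification head removed
backbone = torchvision.models.vit_b_16(weights="IMAGENET1K_V1")
backbone.heads = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(backbone, loader, device="cpu"):
    backbone.to(device)
    feats, labels = [], []
    for x, y in loader:                                    # e.g., Waterbirds images and labels
        feats.append(backbone(x.to(device)).cpu())
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# Assuming train_loader / test_loader iterate over the Waterbirds splits:
# X_train, y_train = extract_features(backbone, train_loader)
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Worst-group accuracy is then computed per (label, background) subgroup on the test split.
```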
In real teaching scenarios, an excellent teacher always teaches what they are good at but the student is not. This gives the student the best assistance in making up for their weaknesses and becoming a strong student overall. Inspired by this, we introduce this approach into the knowledge distillation framework and propose a data-based distillation method named ``Teaching what you Should Teach (TST)''. Specifically, TST contains a neural network-based data augmentation module with a prior bias, which assists in finding what the teacher is good at but the student is not by learning magnitudes and probabilities to generate suitable samples. By training the data augmentation module and the generalized distillation paradigm in turn, a student model with excellent generalization ability can be obtained. To verify the effectiveness of TST, we conducted extensive comparative experiments on object recognition (CIFAR-100 and ImageNet-1k), detection (MS-COCO), and segmentation (Cityscapes) tasks. As experimentally demonstrated, TST achieves state-of-the-art performance on almost all teacher-student pairs. Furthermore, we conduct intriguing studies of TST, including how to solve the performance degradation caused by a stronger teacher and what magnitudes and probabilities are needed for the distillation framework.
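A rough sketch of the alternating training loop implied by this description is given below. The augmentation module here is a trivial learnable perturbation and the loss weighting is arbitrary; it only illustrates the two phases of "expose the student's weaknesses, then distill on those samples", not the actual TST module.

```python
# Illustrative sketch only: the augmentation module, loss weighting, and update
# schedule are assumptions; teacher parameters are assumed frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableAug(nn.Module):
    """Trivial stand-in for a learnable augmentation: additive noise with a learned magnitude."""
    def __init__(self):
        super().__init__()
        self.magnitude = nn.Parameter(torch.tensor(0.1))

    def forward(self, x):
        return x + self.magnitude * torch.randn_like(x)

def kd_loss(student_logits, teacher_logits, T=4.0):
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T

def tst_step(teacher, student, aug, x, y, aug_opt, stu_opt):
    # phase 1: update the augmentation so generated samples are easy for the
    # teacher but hard for the student (maximize the loss gap)
    xa = aug(x)
    gap = F.cross_entropy(student(xa), y) - F.cross_entropy(teacher(xa), y)
    aug_opt.zero_grad()
    (-gap).backward()
    aug_opt.step()
    # phase 2: distill the student on freshly generated samples
    xa = aug(x).detach()
    with torch.no_grad():
        t_logits = teacher(xa)
    loss = kd_loss(student(xa), t_logits) + F.cross_entropy(student(xa), y)
    stu_opt.zero_grad()
    loss.backward()
    stu_opt.step()
```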
Most existing approaches to video prediction build their models on a Single-In-Single-Out (SISO) architecture, which takes the current frame as input to predict the next frame in a recursive manner. This often leads to severe performance degradation when extrapolating further into the future, limiting the practical use of the prediction model. Alternatively, a Multi-In-Multi-Out (MIMO) architecture that outputs all the future frames in one shot naturally breaks the recursive manner and therefore prevents error accumulation. However, only a few MIMO models for video prediction have been proposed to date, and they achieve only inferior performance. The real strength of the MIMO model in this area has not been well recognized and is largely under-explored. Motivated by this, we conduct a comprehensive investigation in this paper to explore how far a simple MIMO architecture can go. Surprisingly, our empirical studies reveal that a simple MIMO model can outperform state-of-the-art work by a margin much larger than expected, especially in dealing with long-term error accumulation. After exploring a number of designs, we propose a new MIMO architecture, MIMO-VP, which extends the pure Transformer with local spatio-temporal blocks and a new multi-output decoder, establishing a new standard in video prediction. We evaluate our model on four highly competitive benchmarks (Moving MNIST, Human3.6M, Weather, KITTI). Extensive experiments show that our model wins 1st place on all the benchmarks with remarkable performance gains and surpasses the best SISO model in all aspects, including efficiency and quantitative and qualitative performance. We believe our model can serve as a new baseline to facilitate future research on video prediction tasks. The code will be released.
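To make the SISO/MIMO distinction concrete, here is a toy MIMO predictor (not MIMO-VP; the local spatio-temporal blocks are omitted and frames are flattened to vectors) that maps N past frames to all M future frames in a single forward pass via learned future-frame queries.

```python
# Illustrative sketch only: sizes, the query-based decoder, and frame flattening are assumptions.
import torch
import torch.nn as nn

class ToyMIMOPredictor(nn.Module):
    def __init__(self, frame_dim, n_future, d_model=256):
        super().__init__()
        self.embed = nn.Linear(frame_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.future_queries = nn.Parameter(torch.randn(n_future, d_model))  # one query per future frame
        self.head = nn.Linear(d_model, frame_dim)

    def forward(self, past):                        # past: (B, N, frame_dim), frames flattened to vectors
        memory = self.encoder(self.embed(past))
        q = self.future_queries.unsqueeze(0).expand(past.size(0), -1, -1)
        return self.head(self.decoder(q, memory))   # (B, M, frame_dim): all future frames at once

# toy usage: 10 past frames of 16x16 grayscale, predict 10 future frames in one shot
model = ToyMIMOPredictor(frame_dim=16 * 16, n_future=10)
pred = model(torch.randn(2, 10, 16 * 16))
print(pred.shape)  # torch.Size([2, 10, 256])
```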
Expressing empathy is important in everyday conversations, and exploring how empathy arises is crucial for automatic response generation. Most previous approaches consider only a single factor that affects empathy. In practice, however, the generation and expression of empathy is a highly complex and dynamic psychological process. A listener needs to identify the events that cause the speaker's emotions (emotion cause extraction), project those events onto their own experience (knowledge extension), and express empathy in the most appropriate way (communication mechanism). To this end, we propose a novel approach that integrates the three components of emotion cause, knowledge graph, and communication mechanism for empathetic response generation. Experimental results on the benchmark dataset demonstrate the effectiveness of our method and show that incorporating these key components generates more informative and empathetic responses.
Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match, at initialization, the performance of fully trained dense networks, without any optimization of the network's weights (i.e., untrained networks). However, the presence of such untrained subnetworks in graph neural networks (GNNs) remains mysterious. In this paper, we carry out the first-of-its-kind exploration of discovering matching untrained GNNs. With sparsity as the core tool, we can find \textit{untrained sparse subnetworks} at initialization that match the performance of \textit{fully trained dense} GNNs. Beyond this already encouraging finding of comparable performance, we show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem, and hence become a powerful tool for enabling deeper GNNs without bells and whistles. We also observe that such sparse untrained subnetworks exhibit appealing performance in out-of-distribution detection and robustness to input perturbations. We evaluate our method across widely-used GNN architectures on various popular datasets, including the Open Graph Benchmark (OGB).
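A minimal sketch of the underlying mechanism, under the assumption of an edge-popup-style score-based mask search (the paper's exact procedure may differ): weights stay frozen at their random initialization and only per-weight scores are trained, with the top-scoring fraction of weights kept in the forward pass of a simple GCN-style layer.

```python
# Illustrative sketch only: the score-based mask search and GCN layer are assumptions.
import torch
import torch.nn as nn

class PopupLinear(nn.Module):
    """Linear layer whose random weights are frozen; only per-weight scores are trained."""
    def __init__(self, d_in, d_out, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * d_in ** -0.5, requires_grad=False)
        self.score = nn.Parameter(torch.randn(d_out, d_in) * 0.01)
        self.sparsity = sparsity

    def forward(self, x):
        k = int(self.score.numel() * (1 - self.sparsity))
        thresh = self.score.flatten().topk(k).values.min()
        hard = (self.score >= thresh).float()
        # straight-through estimator: hard mask in the forward pass, gradients flow to the scores
        mask = hard + self.score - self.score.detach()
        return x @ (self.weight * mask).t()

class UntrainedGCNLayer(nn.Module):
    """One GCN-style layer: propagate with a normalized adjacency, then apply the masked linear."""
    def __init__(self, d_in, d_out, sparsity=0.5):
        super().__init__()
        self.lin = PopupLinear(d_in, d_out, sparsity)

    def forward(self, x, adj_norm):                  # x: (N, d_in), adj_norm: (N, N)
        return torch.relu(self.lin(adj_norm @ x))

# toy usage on a random graph with self-loops
N, d = 8, 16
adj = (torch.rand(N, N) < 0.3).float()
adj = (((adj + adj.t()) > 0).float() + torch.eye(N)).clamp(max=1)
deg = adj.sum(1)
adj_norm = adj / deg.sqrt().outer(deg.sqrt())
print(UntrainedGCNLayer(d, 32)(torch.randn(N, d), adj_norm).shape)  # torch.Size([8, 32])
```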
COVID-19 has imposed disproportionate mental health consequences on the public at different stages of the pandemic. We use a computational approach to capture the specific aspects that trigger an online community's anxiety about the pandemic and study how these aspects change over time. First, using thematic analysis, we identified nine sources of anxiety (SOA) in a sample of Reddit posts from r/COVID19_support (n = 86). We then automatically labeled the SOAs in a larger chronological sample (n = 6,535) by training a classifier on a manually annotated sample of Reddit users' posts (n = 793). The nine SOAs align with items in a recently developed pandemic anxiety measurement scale. We observed that Reddit users' concerns about health risks remained high during the first eight months of the pandemic. These concerns diminished dramatically even though case counts surged later. In general, as the pandemic progressed, the intensity of the SOAs disclosed in users' language declined. However, concerns about mental health and the future grew steadily throughout the period covered by this study. People also tended to use more intense language to describe mental health concerns than health risks or death. Our results suggest that even though COVID-19 gradually weakened as a health threat thanks to appropriate countermeasures, the mental health conditions of this online group did not necessarily improve. Our system lays the groundwork for population health and epidemiology scholars to examine, in a timely manner, the aspects that trigger pandemic anxiety.
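A small sketch of the scaling-up step (an assumed, plausible pipeline, not the authors' classifier): a multi-label text classifier is fit on the manually annotated posts and then used to tag the sources of anxiety in the larger chronological sample.

```python
# Illustrative sketch only: feature choice and classifier are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

def fit_soa_tagger(annotated_posts, soa_matrix):
    """annotated_posts: list of post texts (the n = 793 manually coded sample);
    soa_matrix: binary array of shape (n_posts, 9), one column per source of anxiety."""
    tagger = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    return tagger.fit(annotated_posts, soa_matrix)

# Assuming the annotated and chronological samples are available as lists of strings:
# tagger = fit_soa_tagger(posts_793, labels_793)
# soa_predictions = tagger.predict(posts_6535)   # label the larger chronological sample
```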
This work presents a next-generation human-robot interface that can infer and realize the user's manipulation intention via vision alone. Specifically, we develop a system that integrates near-eye tracking and robotic manipulation to enable user-specified manipulation (e.g., grasping, pick-and-place, etc.), in which visual information is merged with human attention to create a mapping to the desired robot actions. To enable vision-guided manipulation, a head-mounted near-eye tracking device is developed to track eye movements in real time, so that the user's visual attention can be determined. To improve grasping performance, a transformer-based grasp model is then developed. Stacked transformer blocks are used to extract hierarchical features, in which the channel dimension is expanded while the resolution of the feature map is squeezed at each stage. Experimental validation shows that the eye-tracking system yields low gaze-estimation error and the grasping system yields promising results on multiple grasping datasets. This work is a proof of concept for gaze-interactive assistive robots, which hold great promise for helping the elderly or people with upper-limb disabilities in their daily lives. A demo video is available at \url{https://www.youtube.com/watch?v=yuz1hukyurm}.
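The hierarchical design described above can be sketched as follows (an assumed toy backbone, not the paper's grasp model): each stage halves the spatial resolution while doubling the channel dimension, and stacked transformer blocks operate on the tokens at each scale, producing multi-scale features for a grasp head.

```python
# Illustrative sketch only: stage depths, dims, and the downsampling operator are assumptions.
import torch
import torch.nn as nn

class Stage(nn.Module):
    def __init__(self, c_in, c_out, depth=2, nhead=4):
        super().__init__()
        self.downsample = nn.Conv2d(c_in, c_out, kernel_size=2, stride=2)   # halve H, W; expand channels
        layer = nn.TransformerEncoderLayer(c_out, nhead=nhead, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)        # stacked transformer blocks

    def forward(self, x):                                    # x: (B, C_in, H, W)
        x = self.downsample(x)
        B, C, H, W = x.shape
        tokens = self.blocks(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        return tokens.transpose(1, 2).reshape(B, C, H, W)

class HierarchicalGraspBackbone(nn.Module):
    def __init__(self, in_ch=3, dims=(64, 128, 256)):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, dims[0], kernel_size=4, stride=4)      # initial patchify
        self.stages = nn.ModuleList(Stage(dims[i], dims[i + 1]) for i in range(len(dims) - 1))

    def forward(self, x):
        x = self.stem(x)
        feats = [x]
        for stage in self.stages:
            x = stage(x)
            feats.append(x)                                   # multi-scale features for a grasp head
        return feats

# toy usage
feats = HierarchicalGraspBackbone()(torch.randn(1, 3, 128, 128))
print([tuple(f.shape) for f in feats])  # [(1, 64, 32, 32), (1, 128, 16, 16), (1, 256, 8, 8)]
```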
Effective exploration is critical for reinforcement learning agents in environments with sparse rewards or high-dimensional state-action spaces. Recent works based on state-visitation counts, curiosity, and entropy maximization generate intrinsic reward signals to motivate the agent to visit novel states for exploration. However, the agent can be distracted by perturbations in sensory inputs that contain novel but task-irrelevant information, e.g., due to sensor noise or background changes. In this work, we introduce a sequential information bottleneck objective for learning compressed and temporally coherent representations by modeling and compressing the sequential predictive information in time-series observations. For efficient exploration in noisy environments, we further construct intrinsic rewards that capture task-relevant state novelty based on the learned representations. We derive a variational upper bound on the sequential information bottleneck objective for practical optimization and provide an information-theoretic interpretation of the derived bound. Our experiments on a set of challenging image-based simulated control tasks show that our method achieves better sample efficiency, as well as robustness to both white noise and natural video backgrounds, compared with state-of-the-art methods based on curiosity, entropy maximization, and information gain.
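A simplified sketch of the reward-shaping skeleton is given below (illustrative assumptions only; the paper optimizes a variational bound on a sequential information bottleneck, which is not reproduced here): observations are compressed by an encoder, and an intrinsic reward is derived from novelty measured in the learned latent space, here the prediction error of a small latent dynamics model, so that task-irrelevant pixel noise contributes little to the bonus.

```python
# Illustrative sketch only: architecture sizes and the novelty measure are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentNoveltyReward(nn.Module):
    def __init__(self, obs_dim, act_dim, z_dim=32):
        super().__init__()
        # encoder that compresses high-dimensional observations into a compact latent
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
        # small latent dynamics model used to measure novelty via prediction error
        self.dynamics = nn.Sequential(nn.Linear(z_dim + act_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))

    def forward(self, obs, action, next_obs):
        z, z_next = self.encoder(obs), self.encoder(next_obs)
        z_pred = self.dynamics(torch.cat([z, action], dim=-1))
        # per-transition intrinsic reward: error of the latent prediction
        return F.mse_loss(z_pred, z_next.detach(), reduction="none").mean(-1)

# toy usage: a batch of 4 transitions with 100-d observations and 3-d actions
bonus = LatentNoveltyReward(100, 3)(torch.randn(4, 100), torch.randn(4, 3), torch.randn(4, 100))
print(bonus.shape)  # torch.Size([4])
```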